AI 2027
Will Humanity Be Rendered Obsolete by AI?
Mohamed El Louadi and Emna Ben Romdhane
This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on the theoretical work of Irving J. Good and Nick Bostrom, as well as recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. In light of machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence vastly exceeding humanity's own and fundamentally alien to it. Human extinction may result not from malice, but from uncontrollable, indifferent cognitive superiority.
The AI Doomers Are Getting Doomier
Nate Soares doesn't set aside money for his 401(k). "I just don't expect the world to be around," he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which "everything is fully automated," he told me. That is, "if we're around."
Two Paths for A.I.
Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He'd become convinced that the company wasn't prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in "alignment," he told me (the suite of techniques used to ensure that A.I. acts in accordance with human commands and values) were lagging behind gains in intelligence.